48 research outputs found

    An Investigation into Trust & Reputation for Agent-Based Virtual Organisations

    Trust is a prevalent concept in human society. In essence, it concerns our reliance on the actions of our peers, and the actions of other entities within our environment. For example, we may rely on our car starting in the morning to get to work on time, and on the actions of our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require computing resources to work together seamlessly, across organisational and geographical boundaries (Foster et al., 2001). In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another. Moreover, certain resources may fail more often than others, and for this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which resources to rely upon. With this in mind, our goal here is to develop a mechanism by which software entities can automatically assess the trustworthiness of a given entity (the trustee). In achieving this goal, we have developed a probabilistic framework for assessing trust based on observations of a trustee's past behaviour. Such observations may be accounted for either when they are made directly by the assessing party (the truster), or by a third party (reputation source). In the latter case, our mechanism can cope with the possibility that third party information is unreliable, either because the sender is lying, or because it has a different world view. In this document, we present our framework, and show how it can be applied to cases in which a trustee's actions are represented as binary events; for example, a trustee may cooperate with the truster, or it may defect. We place our work in context, by showing how it constitutes part of a system for managing coalitions of agents, operating in a grid computing environment. We then give an empirical evaluation of our method, which shows that it outperforms the most similar system in the literature, in many important scenarios.
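
    As an illustration of the general idea only (not the report's actual implementation), trust over binary cooperate/defect observations is conventionally modelled with a Beta distribution over the trustee's probability of cooperating. The minimal Python sketch below assumes a uniform Beta(1, 1) prior; the function name and example figures are illustrative.

```python
# Illustrative sketch only: a standard Beta-distribution trust estimate
# from binary (cooperate/defect) observations. Names and the uniform prior
# are assumptions, not the authors' actual implementation.

def trust_estimate(successes: int, failures: int) -> float:
    """Expected probability of cooperation under a Beta(1, 1) prior."""
    alpha = successes + 1  # prior pseudo-count for cooperation
    beta = failures + 1    # prior pseudo-count for defection
    return alpha / (alpha + beta)

# Example: 8 observed cooperations and 2 defections give an estimate of 0.75.
print(trust_estimate(8, 2))
```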

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of service providers populating the environment. Specifically, we propose a generic Bayesian trust model, and formulate the optimal Bayesian solution to the exploration-exploitation problem facing the agents when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm to approximate that solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years in which the competition has been run.
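
    The sketch below illustrates, under assumed simplifications, how the expected value of perfect information (VPI) can guide exploration over service providers modelled with independent Beta-Bernoulli posteriors. The Monte Carlo estimation and the myopic selection rule are illustrative choices, not the algorithm from the paper.

```python
# Illustrative sketch only: a Beta-Bernoulli model of provider reliability
# with a Monte Carlo estimate of the value of perfect information (VPI).
# The paper's model and algorithm are more general than this toy version.
import numpy as np

rng = np.random.default_rng(0)

def value_of_information(alphas, betas, n_samples=100_000):
    """Estimate the value of learning each provider's true reliability."""
    means = alphas / (alphas + betas)
    best = int(np.argmax(means))
    second_best_mean = np.partition(means, -2)[-2]
    gains = np.zeros(len(alphas))
    for i, (a, b) in enumerate(zip(alphas, betas)):
        theta = rng.beta(a, b, n_samples)  # samples of the true reliability
        if i == best:
            # Information is valuable if the apparent best provider turns out
            # to be worse than the runner-up.
            gains[i] = np.mean(np.maximum(second_best_mean - theta, 0.0))
        else:
            # Information is valuable if this provider turns out to be better
            # than the current best.
            gains[i] = np.mean(np.maximum(theta - means[best], 0.0))
    return means, gains

alphas = np.array([9.0, 3.0, 1.0])  # observed successes + 1 per provider
betas = np.array([3.0, 2.0, 1.0])   # observed failures + 1 per provider
means, gains = value_of_information(alphas, betas)
choice = int(np.argmax(means + gains))  # myopic value-of-information rule
print(means, gains, choice)
```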

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
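
    As a much-simplified sketch of combining direct experience with third-party reputation, the example below pools success/failure counts, with witness counts discounted by an assumed reliability weight. TRAVOS's actual treatment of inaccurate reports is more involved; everything here is illustrative.

```python
# Simplified illustration only: direct experience and third-party reputation
# are both expressed as counts of successful and unsuccessful interactions,
# and witness counts are discounted by an assumed reliability weight. This is
# not TRAVOS's actual discounting mechanism.

def combined_trust(direct, reports):
    """direct: (successes, failures); reports: list of (successes, failures, weight)."""
    s, f = direct
    for ws, wf, weight in reports:
        s += weight * ws  # discount witness evidence by its estimated reliability
        f += weight * wf
    return (s + 1) / (s + f + 2)  # expected value under a Beta(1, 1) prior

# One direct experience, plus two witnesses of differing assumed reliability.
print(combined_trust((3, 1), [(10, 0, 0.9), (0, 10, 0.2)]))
```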

    The ART of IAM: The Winning Strategy for the 2006 Competition

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for others, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. This situation has led to the development of a number of trust and reputation models, which aim to facilitate an agent's decision making in the face of uncertainty regarding the behaviour of its peers. However, these multifarious models employ a variety of different representations of trust between agents, and measure performance in many different ways. This has made it hard to adequately evaluate the relative properties of different models, raising the need for a common platform on which to compare competing mechanisms. To this end, the ART Testbed Competition has been proposed, in which agents using different trust models compete against each other to provide services in an open marketplace. In this paper, we present the winning strategy for this competition in 2006, provide an analysis of the factors that led to this success, and discuss lessons learnt from the competition about issues of trust in multiagent systems in general. Our strategy, IAM, is Intelligent (using statistical models for opponent modelling), Abstemious (spending its money parsimoniously based on its trust model) and Moral (providing fair and honest feedback to those that request it).

    Sybil tolerance and probabilistic databases to compute web services trust

    © Springer International Publishing Switzerland 2015. This paper discusses how Sybil attacks can undermine trust management systems, and how to respond to these attacks using advanced techniques such as credibility and probabilistic databases. In such attacks, end-users deliberately adopt multiple identities and can hence provide inconsistent ratings over the same Web services. Many existing approaches rely on arbitrary choices to filter out Sybil users and reduce their attack capabilities; however, this turns out to be inefficient. Our approach relies on non-Sybil, credible users who provide consistent ratings over Web services and hence can be trusted. To establish these ratings and debunk Sybil users, techniques such as fuzzy clustering, graph search, and probabilistic databases are adopted. A series of experiments is carried out to demonstrate the robustness of our trust approach in the presence of Sybil attacks.
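
    The toy example below captures only the consistency intuition described above: raters whose ratings deviate from the per-service consensus receive low credibility, and service trust is a credibility-weighted average. The fuzzy clustering, graph search, and probabilistic-database machinery of the actual approach is not modelled; all names and figures are assumptions.

```python
# Toy stand-in for the consistency idea only: credibility falls with a
# rater's average deviation from the per-service consensus, and trust is a
# credibility-weighted mean of ratings. Not the paper's actual technique.
import statistics

ratings = {  # rater -> {service: rating in [0, 1]}
    "alice": {"s1": 0.90, "s2": 0.80},
    "bob":   {"s1": 0.85, "s2": 0.75},
    "sybil": {"s1": 0.10, "s2": 0.10},
}

services = {s for r in ratings.values() for s in r}
consensus = {s: statistics.median(r[s] for r in ratings.values() if s in r)
             for s in services}

# Credibility decreases with average deviation from the consensus.
credibility = {
    u: 1.0 - statistics.mean(abs(v - consensus[s]) for s, v in rs.items())
    for u, rs in ratings.items()
}

trust = {
    s: sum(credibility[u] * rs[s] for u, rs in ratings.items() if s in rs)
       / sum(credibility[u] for u, rs in ratings.items() if s in rs)
    for s in services
}
print(credibility)
print(trust)
```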

    Stereotype reputation with limited observability

    Assessing trust and reputation is essential in multi-agent systems where agents must decide who to interact with. Assessment typically relies on the direct experience of a trustor with a trustee agent, or on information from witnesses. Where direct or witness information is unavailable, such as when agent turnover is high, stereotypes learned from common traits and behaviour can provide this information. Such traits may be only partially or subjectively observed, with witnesses not observing traits of some trustees or interpreting their observations differently. Existing stereotype-based techniques are unable to account for such partial observability and subjectivity. In this paper, we propose a method for extracting information from witness observations that enables stereotypes to be applied in partially and subjectively observable dynamic environments. Specifically, we present a mechanism for learning translations between observations made by trustor and witness agents with subjective interpretations of traits. We show through simulations that such translation is necessary for reliable reputation assessments in dynamic environments with partial and subjective observability.
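
    The following toy sketch illustrates the translation idea: for trustees observed by both agents, co-occurrences of the witness's and the trustor's trait labels yield a simple translation table. The learning mechanism in the paper is more sophisticated; the labels and names below are assumptions.

```python
# Toy illustration only: for trustees observed by both agents, count how the
# witness's (subjective) trait labels co-occur with the trustor's labels,
# yielding a simple conditional translation table.
from collections import Counter, defaultdict

# (trustor_label, witness_label) pairs for trustees both agents have observed.
co_observations = [
    ("fast", "quick"), ("fast", "quick"), ("fast", "slow"),
    ("slow", "slow"), ("slow", "slow"),
]

counts = defaultdict(Counter)
for trustor_label, witness_label in co_observations:
    counts[witness_label][trustor_label] += 1

def translate(witness_label):
    """Map a witness's trait label to the trustor's most likely label."""
    return counts[witness_label].most_common(1)[0][0]

print(translate("quick"))  # -> "fast"
print(translate("slow"))   # -> "slow" (2 of the 3 co-observations agree)
```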

    Towards a Formal Framework for Computational Trust

    We define a mathematical measure for the quantitative comparison of probabilistic computational trust systems, and use it to compare a well-known class of algorithms based on the so-called beta model. The main novelty is that our approach is formal, rather than based on experimental simulation.
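
    For reference, the beta model referred to here is conventionally stated as follows (a standard formulation, not quoted from the paper):

```latex
% Standard beta-model formulation (for reference; not quoted from the paper).
\[
  \theta \mid s, f \;\sim\; \mathrm{Beta}(s + 1,\; f + 1),
  \qquad
  \mathbb{E}[\theta \mid s, f] \;=\; \frac{s + 1}{s + f + 2},
\]
where $s$ and $f$ are the numbers of positive and negative outcomes observed
for the trustee, and $\theta$ is its unknown probability of behaving well.
```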